23 research outputs found

    A Class of Semidefinite Programs with rank-one solutions

    Get PDF
    We show that a class of semidefinite programs (SDP) admits a solution that is a positive semidefinite matrix of rank at most r, where r is the rank of the matrix involved in the objective function of the SDP. The optimization problems of this class are semidefinite packing problems, which are the SDP analogs of vector packing problems. Of particular interest is the case in which our result guarantees the existence of a solution of rank one: we show that the computation of this solution actually reduces to a Second-Order Cone Program (SOCP). We point out an application in statistics, in the optimal design of experiments. Comment: 16 pages

    Computing Optimal Designs of multiresponse Experiments reduces to Second-Order Cone Programming

    Full text link
    Elfving's theorem is a major result in the theory of optimal experimental design, which gives a geometrical characterization of c-optimality. In this paper, we extend this theorem to the case of multiresponse experiments, and we show that when the number of experiments is finite, c-, A-, T- and D-optimal designs of multiresponse experiments can be computed by Second-Order Cone Programming (SOCP). Moreover, our SOCP approach can deal with design problems in which the variable is subject to several linear constraints. We give two proofs of this generalization of Elfving's theorem. One is based on Lagrangian dualization techniques and relies on the fact that the semidefinite programming (SDP) formulation of the multiresponse c-optimal design problem always has a solution which is a matrix of rank 1. Therefore, the complexity of this problem fades. We also investigate a model robust generalization of c-optimality, for which an Elfving-type theorem was established by Dette (1993). We show with the same Lagrangian approach that these model robust designs can be computed efficiently by minimizing a geometric mean under some norm constraints. Moreover, we show that the optimality conditions of this geometric programming problem yield an extension of Dette's theorem to the case of multiresponse experiments. When the number of unknown parameters is small, or when the number of linear functions of the parameters to be estimated is small, we show by numerical examples that our approach can be between 10 and 1000 times faster than the classic, state-of-the-art algorithms.
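As a toy illustration of c-optimality (not the paper's SOCP formulation), consider the simple linear model y = θ1·x + θ2 with regression vector f(x) = (x, 1): a c-optimal design minimizes c^T M(w)^{-1} c over design weights w, where M(w) = Σ w_i f(x_i)f(x_i)^T. A minimal brute-force sketch, with all function names hypothetical:

```python
def c_criterion(w, points, c):
    """Value of c^T M(w)^{-1} c for the model y = theta1*x + theta2,
    i.e. regression vector f(x) = (x, 1), so M(w) is a 2x2 matrix."""
    a = sum(wi * x * x for wi, x in zip(w, points))   # M[0][0]
    b = sum(wi * x for wi, x in zip(w, points))       # M[0][1] = M[1][0]
    d = sum(w)                                        # M[1][1]
    det = a * d - b * b
    # apply the inverse of [[a, b], [b, d]] to c
    v = ((d * c[0] - b * c[1]) / det, (-b * c[0] + a * c[1]) / det)
    return c[0] * v[0] + c[1] * v[1]

# brute-force search over designs on two support points (illustration only;
# the paper's point is that an SOCP solver handles this at scale)
points, c = [0.0, 1.0], (1.0, 0.0)   # estimate the slope theta1
best_w = min((i / 1000 for i in range(1, 1000)),
             key=lambda w: c_criterion((1 - w, w), points, c))
```

With support points {0, 1} and c = (1, 0), the criterion equals 1/(w(1-w)), so the optimal design puts weight 1/2 on each point, with criterion value 4.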

    Improved Analysis of two Algorithms for Min-Weighted Sum Bin Packing

    Full text link
    We study the Min-Weighted Sum Bin Packing problem, a variant of the classical Bin Packing problem in which items have a weight, and each item induces a cost equal to its weight multiplied by the index of the bin in which it is packed. This is in fact equivalent to a batch scheduling problem that arises in many fields of application such as appointment scheduling or warehouse logistics. We give improved lower and upper bounds on the approximation ratio of two simple algorithms for this problem. In particular, we show that the knapsack-batching algorithm, which iteratively solves knapsack problems over the set of remaining items to pack the maximal weight in the current bin, has an approximation ratio of at most 17/10.
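The knapsack-batching idea described above admits a short sketch (brute-force knapsack, hypothetical names; the paper's 17/10 bound concerns the algorithm, not this implementation): each bin greedily receives the maximum-weight feasible subset of the remaining items, and an item of weight w packed in bin k costs w·k.

```python
from itertools import combinations

def max_weight_knapsack(items, capacity):
    """Indices of the subset of (weight, size) items with total size
    <= capacity and maximum total weight (brute force, for illustration)."""
    best, best_set = -1.0, ()
    for r in range(len(items) + 1):
        for subset in combinations(range(len(items)), r):
            if sum(items[i][1] for i in subset) <= capacity:
                w = sum(items[i][0] for i in subset)
                if w > best:
                    best, best_set = w, subset
    return best_set

def knapsack_batching(items, capacity):
    """Pack items bin by bin via knapsack-batching; return the total cost,
    where an item of weight w in bin k (1-indexed) costs w * k."""
    assert all(s <= capacity for _, s in items)  # every item must fit in a bin
    remaining, cost, k = list(range(len(items))), 0.0, 0
    while remaining:
        k += 1
        pool = [items[i] for i in remaining]
        chosen = max_weight_knapsack(pool, capacity)
        cost += k * sum(pool[j][0] for j in chosen)
        remaining = [remaining[j] for j in range(len(pool)) if j not in chosen]
    return cost
```

On items [(3,2), (2,2), (1,1)] with capacity 3, bin 1 gets weights 3 and 1 (cost 4), bin 2 gets weight 2 (cost 4), for a total of 8.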

    A Case Study on Optimizing Toll Enforcements on Motorways

    Get PDF
    In this paper we present the problem of computing optimal tours of toll inspectors on German motorways. This problem is a special type of vehicle routing problem and builds on an integrated model consisting of a tour-planning and a duty-rostering part. The tours should guarantee a network-wide control whose intensity is proportional to given spatial and time-dependent traffic distributions. We model this using a space-time network and formulate the associated optimization problem as an integer program (IP). Since sequential approaches fail, we integrated the assignment of crews to the tours in our model. In this process all duties of a crew member must fit into a feasible roster. The rostering part is modeled as a Multi-Commodity Flow Problem in a directed acyclic graph, where specific paths correspond to feasible rosters for one month. We present computational results in a case study on a German subnetwork that documents the practicability of our approach.

    Restricted Adaptivity in Stochastic Scheduling

    Get PDF
    We consider the stochastic scheduling problem of minimizing the expected makespan on m parallel identical machines. While the (adaptive) list scheduling policy achieves an approximation ratio of 2, any (non-adaptive) fixed assignment policy has performance guarantee Ω((log m)/(log log m)). Although the performance of the latter class of policies is worse, there are applications in which non-adaptive policies are desired. In this work, we introduce the two classes of δ-delay and τ-shift policies whose degree of adaptivity can be controlled by a parameter. We present a policy, belonging to both classes, which is an O(log log m)-approximation for reasonably bounded parameters. In other words, an exponential improvement on the performance of any fixed assignment policy can be achieved when allowing a small degree of adaptivity. Moreover, we provide a matching lower bound for any δ-delay and τ-shift policy when both parameters, respectively, are in the order of the expected makespan of an optimal non-anticipatory policy.
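For intuition, the adaptive list scheduling policy mentioned above can be sketched for deterministic processing times (the stochastic setting replaces these by random variables whose realizations are revealed on completion): whenever a machine falls idle, it receives the next job from the list.

```python
import heapq

def list_schedule(jobs, m):
    """Graham's list scheduling on m identical machines: each job in list
    order goes to the machine that becomes idle first; returns the makespan.
    (Deterministic sketch of the adaptive policy; names are ours.)"""
    loads = [0.0] * m          # current finishing time of each machine
    heapq.heapify(loads)
    for p in jobs:
        heapq.heappush(loads, heapq.heappop(loads) + p)  # least-loaded machine
    return max(loads)
```

For example, jobs [3, 3, 2, 2, 2] on 2 machines yield makespan 7, while an optimal schedule achieves 6, illustrating the gap that the 2-approximation bound controls.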

    Competitive Kill-and-Restart and Preemptive Strategies for Non-Clairvoyant Scheduling

    Full text link
    We study kill-and-restart and preemptive strategies for the fundamental scheduling problem of minimizing the sum of weighted completion times on a single machine in the non-clairvoyant setting. First, we show a lower bound of 3 for any deterministic non-clairvoyant kill-and-restart strategy. Then, we give for any b > 1 a tight analysis for the natural b-scaling kill-and-restart strategy as well as for a randomized variant of it. In particular, we show a competitive ratio of (1+3√3) ≈ 6.197 for the deterministic strategy and of ≈ 3.032 for the randomized one, by making use of the largest eigenvalue of a Toeplitz matrix. In addition, we show that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is 2-competitive when jobs are released online, matching the lower bound for the unit-weight case with trivial release dates for any non-clairvoyant algorithm. Using this result as well as the competitiveness of round-robin for multiple machines, we prove performance guarantees smaller than 10 for adaptations of the b-scaling strategy to online release dates and unweighted jobs on identical parallel machines. Comment: An extended abstract appeared in the Proceedings of the 24th International Conference on Integer Programming and Combinatorial Optimization
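A short simulation conveys the kill-and-restart idea (single machine, unit weights by default; the function name and the exact round structure below are one natural reading of a b-scaling strategy, not the paper's precise definition): in round k, each unfinished job is probed for b^k time units and killed if it does not complete, so all work of a killed run is lost.

```python
def b_scaling_cost(processing_times, b=2.0, weights=None):
    """Simulate a deterministic b-scaling kill-and-restart strategy on one
    machine: in round k = 0, 1, ... every unfinished job runs for b**k time
    units and is killed if it does not finish (progress is lost, since the
    non-clairvoyant scheduler never learns partial processing times).
    Returns the weighted sum of completion times."""
    n = len(processing_times)
    weights = weights or [1.0] * n
    unfinished = list(range(n))
    t = total = 0.0
    k = 0
    while unfinished:
        quantum = b ** k
        still = []
        for j in unfinished:
            if processing_times[j] <= quantum:
                t += processing_times[j]     # job completes within the quantum
                total += weights[j] * t
            else:
                t += quantum                 # killed after the full quantum
                still.append(j)
        unfinished = still
        k += 1
    return total
```

With processing times [1, 3] and b = 2, job 1 finishes at time 1; job 2 is killed after probes of length 1 and 2 and completes at time 7, giving total cost 8.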

    Distributionally Robust Optimal Designs

    No full text
    The optimal design of experiments for nonlinear (or generalized linear) models can be formulated as the problem of finding a design ξ maximizing a criterion Φ(ξ,θ), where θ is the unknown quantity of interest that we want to determine. Several strategies have been proposed to deal with the dependency of the optimal design on the unknown parameter θ. Whenever possible, a sequential approach can be applied. Otherwise, Bayesian and maximin approaches have been proposed. The robust maximin design maximizes the worst case of the criterion Φ(ξ,θ) as θ varies in a set Θ. In many cases, however, such a design performs well only in a very small subset of the region Θ, so a maximin design might be far away from the optimal design for the true value of the unknown parameter. On the other hand, it has been proposed to assume that a prior for θ is available, and to maximize the expected value of the criterion with respect to this prior. One objection to this approach is that when a sequential approach is not possible, we rarely have precise distributional information on the unknown parameter θ. In the literature on optimization under uncertainty, the Bayesian and maximin approaches are known as "stochastic programming" and "robust optimization", respectively. A third way, somehow in between the two other paradigms, has received a lot of attention recently. The distributionally robust approach can be seen as a robust counterpart of the Bayesian approach, in which we optimize against the worst case over all priors belonging to a family of probability distributions. In this talk, we will give equivalence theorems to characterize distributionally robust optimal (DRO) designs. We will show that DRO designs can be computed numerically by using semidefinite programming (SDP) or second-order cone programming (SOCP), and we will compare DRO designs to Bayesian and maximin-optimal designs in simple cases. Non UBC. Unreviewed. Author affiliation: Technical University of Berlin. Postdoctoral.